
On LLM-generated Logic Programs and their Inference Execution Methods

Tarau, Paul

arXiv.org Artificial Intelligence

While the multi-step dialog model initiated by ChatGPT is now available from a few dozen online or locally run open-source and closed-source LLMs, it does not cover the need to efficiently extract salient information from an LLM's "parameter-memory", which encapsulates, in heavily compressed form, the result of training the model on trillions of documents and multimodal data. Steps in this direction have been taken, relying on ground truth drawn from additional information sources (e.g., collections of reference documents or web search queries). Among them, we mention work on improving the performance of Retrieval-Augmented Generation (RAG) systems [7] by recursively embedding, clustering, and summarizing chunks of text for better hierarchical LLM-assisted summarization [15], multi-agent hybrid LLM and local computation aggregators [3], and deductive verifiers of chain-of-thought reasoning [9]. A more direct approach is recursion on LLM queries: chaining the LLM's distilled output as input to the next step and casting its content and interrelations in the form of logic programs, to automate and focus this information extraction with minimal human input [18, 20]. As in typical RAG architectures [7, 15], this process can rely on external ground truth, but it can also use new LLM client instances as "oracles" deciding the validity of the synthesized rules or facts.
With focus on automating this unmediated extraction of salient knowledge from the LLM's parameter memory and its encapsulation in the form of synthesized logic programming code, this paper extends the work initiated in [18, 20] with:
- new LLM input-output chaining mechanisms
- new types of generated logic programs
- new relational representations elicited from LLM output steps
- scalable execution mechanisms that accommodate very large logic programs at deeper recursion levels
- soft-unification based execution of LLM-generated logic programs as a principled encapsulation of the RAG retrieval process
The rest of the paper is organized as follows. Section 2 overviews the DeepLLM architecture described in [20] and its new extensions supporting the results in this paper.
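The query-chaining idea described above can be sketched in a few lines. This is a minimal illustration, not the paper's DeepLLM implementation: `ask_llm` is a hypothetical stand-in for any chat-completion client (replaced here by a canned stub so the sketch runs offline), and the distilled output of each step is fed back in as the next query while being recorded as Prolog-style facts.

```python
def ask_llm(topic):
    """Hypothetical LLM oracle: distill a topic into related subtopics.
    A canned stub stands in for a real chat-completion client."""
    canned = {
        "logic programming": ["horn clauses", "unification"],
        "horn clauses": ["resolution"],
    }
    return canned.get(topic, [])

def explore(topic, depth, facts=None):
    """Recursively chain each distilled output back in as the next
    query, casting the elicited relations as logic program facts."""
    if facts is None:
        facts = []
    if depth == 0:
        return facts
    for sub in ask_llm(topic):
        facts.append(f"related('{topic}', '{sub}').")
        explore(sub, depth - 1, facts)
    return facts
```

With the stub above, `explore("logic programming", 2)` yields facts such as `related('logic programming', 'horn clauses').` and `related('horn clauses', 'resolution').`, a fact base a Prolog engine could then query or extend.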


Through the Looking Glass, and what Horn Clause Programs Found There

Tarau, Paul

arXiv.org Artificial Intelligence

Dual Horn clauses mirror key properties of Horn clauses. This paper explores the "other side of the looking glass" to reveal some expected and unexpected symmetries and their practical uses. We revisit Dual Horn clauses as enablers of a form of constructive negation that supports goal-driven forward reasoning and is valid both intuitionistically and classically. In particular, we explore the ability to falsify a counterfactual hypothesis in the context of a background theory expressed as a Dual Horn clause program. With Dual Horn clause programs, in contrast to negation as failure, the variable bindings in their computed answers provide explanations of why a statement is successfully falsified. Moreover, in the propositional case, in contrast to negation as failure as implemented with stable model semantics in ASP systems, and similarly to Horn clause programs, Dual Horn clause programs have polynomial complexity. After specifying their execution model with a metainterpreter, we devise a compilation scheme from Dual Horn clause programs to Horn clause programs, ensuring their execution with no performance penalty, and we design the embedded SymLP language to support combined Horn clause and Dual Horn clause programs. As a (motivating) application, we cast LLM reasoning chains into propositional Horn and Dual Horn clauses that work together to constructively prove and disprove goals, enhancing Generative AI with explainability of reasoning chains.

Keywords: Dual Horn clauses; constructive negation; counterfactual reasoning; theory falsification; LLM generated logic programs; metainterpretation and compilation to Prolog.
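The propositional duality the abstract appeals to can be made concrete with a short sketch. This is not the paper's SymLP system; it only illustrates the classical contrapositive fact behind such a compilation: a dual Horn clause b → h1 ∨ ... ∨ hk, read contrapositively, is the Horn clause not_h1, ..., not_hk → not_b over renamed "falsified" atoms, so falsification reduces to ordinary polynomial-time Horn forward chaining. The atom names and clause encoding below are illustrative assumptions.

```python
def forward_chain(facts, rules):
    """Derive all provable atoms from propositional Horn facts and
    rules, where rules is a list of (head, [body atoms]) pairs.
    Runs in polynomial time in the program size."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for head, body in rules:
            if head not in derived and all(b in derived for b in body):
                derived.add(head)
                changed = True
    return derived

def dual_to_horn(false_facts, dual_rules):
    """Compile a propositional Dual Horn program to a Horn program
    over not_* atoms. A dual clause (b, [h1, ..., hk]) encodes
    b -> h1 v ... v hk; its contrapositive is the Horn clause
    not_b :- not_h1, ..., not_hk."""
    facts = {f"not_{a}" for a in false_facts}
    rules = [(f"not_{b}", [f"not_{h}" for h in heads])
             for b, heads in dual_rules]
    return facts, rules
```

For instance, from the dual clause "magical → unicorn" and the falsified fact "unicorn", forward chaining on the compiled Horn program derives `not_magical`: the hypothesis "magical" is constructively disproved, with the derivation itself serving as the explanation.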